CNN-Cert: An Efficient Framework for Certifying Robustness of Convolutional Neural Networks
Verifying the robustness of neural network classifiers has attracted great
interest and attention due to the success of deep neural networks and their
unexpected vulnerability to adversarial perturbations. Although finding the minimum
adversarial distortion of neural networks (with ReLU activations) has been
shown to be an NP-complete problem, obtaining a non-trivial lower bound of
minimum distortion as a provable robustness guarantee is possible. However,
most previous works focused only on simple fully-connected layers (multilayer
perceptrons) and were limited to ReLU activations. This motivates us to propose
a general and efficient framework, CNN-Cert, that is capable of certifying
robustness on general convolutional neural networks. Our framework is general
-- we can handle various architectures including convolutional layers,
max-pooling layers, batch normalization layers, and residual blocks, as well as
general activation functions; our approach is efficient -- by exploiting the
special structure of convolutional layers, we achieve up to 17 and 11 times
speed-up compared to state-of-the-art certification algorithms (e.g.
Fast-Lin, CROWN) and 366 times speed-up compared to the dual-LP approach
while our algorithm obtains similar or even better verification bounds. In
addition, CNN-Cert generalizes state-of-the-art algorithms such as Fast-Lin and
CROWN. We demonstrate through extensive experiments that our method outperforms
state-of-the-art lower-bound-based certification algorithms in terms of both
bound quality and speed.
Comment: Accepted by AAAI 2019.
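To illustrate why the structure of convolutional layers helps, here is a minimal sketch (not CNN-Cert's actual algorithm, which uses tighter linear relaxations) of propagating elementwise interval bounds through a single 2-D convolution: the bound computation is itself just two convolutions, so it inherits the layer's efficiency. The function name and the choice of scipy.signal.correlate2d are illustrative assumptions.

```python
import numpy as np
from scipy.signal import correlate2d

def conv_interval_bounds(W, lower, upper):
    # A linear layer maps elementwise input bounds to output bounds by
    # pairing positive kernel weights with like bounds and negative
    # kernel weights with opposite bounds (standard interval arithmetic).
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    out_lower = (correlate2d(lower, Wp, mode="valid")
                 + correlate2d(upper, Wn, mode="valid"))
    out_upper = (correlate2d(upper, Wp, mode="valid")
                 + correlate2d(lower, Wn, mode="valid"))
    return out_lower, out_upper
```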
Corrupting Neuron Explanations of Deep Visual Features
The inability of DNNs to explain their black-box behavior has led to a recent
surge of explainability methods. However, there are growing concerns that these
explainability methods are not robust and trustworthy. In this work, we perform
the first robustness analysis of Neuron Explanation Methods under a unified
pipeline and show that these explanations can be significantly corrupted by
random noise and well-designed perturbations added to their probing data. We
find that even adding small random noise with a standard deviation of 0.02 can
already change the assigned concepts of up to 28% of neurons in the deeper layers.
Furthermore, we devise a novel corruption algorithm and show that our algorithm
can manipulate the explanations of more than 80% of neurons by poisoning less
than 10% of the probing data. This raises concerns about trusting Neuron
Explanation Methods in real-life safety- and fairness-critical applications.
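A minimal sketch of the random-noise test described above, assuming a black-box `assign_concepts` interface (probing data in, one concept id per neuron out); the interface and names are hypothetical, not an API from the paper:

```python
import numpy as np

def concept_corruption_rate(assign_concepts, probe_data, sigma=0.02, seed=0):
    # Perturb the probing data with zero-mean Gaussian noise and measure
    # the fraction of neurons whose assigned concept changes.
    rng = np.random.default_rng(seed)
    clean = assign_concepts(probe_data)
    noisy = assign_concepts(probe_data + rng.normal(0.0, sigma, probe_data.shape))
    changed = sum(c != n for c, n in zip(clean, noisy))
    return changed / len(clean)
```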
Efficient Neural Network Robustness Certification with General Activation Functions
Finding the minimum distortion of adversarial examples and thus certifying
robustness of neural network classifiers for given data points is known to be a
challenging problem. Nevertheless, it has recently been shown to be possible to
give a non-trivial certified lower bound of minimum adversarial distortion, and
some recent progress has been made in this direction by exploiting the
piecewise-linear nature of ReLU activations. However, generic robustness
certification for general activation functions remains largely
unexplored. To address this issue, in this paper we introduce CROWN, a general
framework to certify robustness of neural networks with general activation
functions for given input data points. The novelty of our algorithm lies in
bounding a given activation function with linear and quadratic functions, which
allows it to tackle general activation functions including but not limited to
four popular choices: ReLU, tanh, sigmoid and arctan. In addition, we
facilitate the search for a tighter certified lower bound by adaptively
selecting appropriate surrogates for each neuron activation. Experimental
results show that CROWN on ReLU networks can notably improve the certified
lower bounds compared to the current state-of-the-art algorithm Fast-Lin, while
having comparable computational efficiency. Furthermore, CROWN also
demonstrates its effectiveness and flexibility on networks with general
activation functions, including tanh, sigmoid and arctan.
Comment: Accepted by NIPS 2018. Huan Zhang and Tsui-Wei Weng contributed
equally.
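To make the bounding idea concrete, here is a sketch of linear bounds for tanh on an interval in its concave region (0 <= l < u); CROWN's full case analysis covers all sign patterns and adaptively selects the tangent point, so this shows only one illustrative case:

```python
import numpy as np

def tanh_linear_bounds(l, u):
    # On [l, u] with 0 <= l < u, tanh is concave, so the chord is a valid
    # lower bound and any tangent line is a valid upper bound.
    a_L = (np.tanh(u) - np.tanh(l)) / (u - l)   # chord slope
    b_L = np.tanh(l) - a_L * l
    d = 0.5 * (l + u)                            # tangent point: the midpoint
    a_U = 1.0 - np.tanh(d) ** 2                  # d/dx tanh(x) = 1 - tanh(x)^2
    b_U = np.tanh(d) - a_U * d
    # a_L * x + b_L <= tanh(x) <= a_U * x + b_U holds for all x in [l, u].
    return (a_L, b_L), (a_U, b_U)
```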
Concept-Monitor: Understanding DNN training through individual neurons
In this work, we propose a general framework called Concept-Monitor to help
demystify the black-box DNN training processes automatically using a novel
unified embedding space and concept diversity metric. Concept-Monitor enables
human-interpretable visualizations and indicators of the DNN training process
and facilitates transparency as well as a deeper understanding of how DNNs
develop during training. Inspired by these findings, we also propose
a new training regularizer that incentivizes hidden neurons to learn diverse
concepts, which we show to improve training performance. Finally, we apply
Concept-Monitor to conduct several case studies on different training paradigms
including adversarial training, fine-tuning and network pruning via the Lottery
Ticket Hypothesis.
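As a sketch of what a concept-diversity regularizer could look like (the exact formulation in the paper may differ; the per-neuron concept embeddings are assumed to be given):

```python
import torch

def concept_diversity_penalty(embeddings: torch.Tensor) -> torch.Tensor:
    # embeddings: (n_neurons, d) concept embeddings, one per hidden neuron.
    # Penalize pairwise cosine similarity so neurons are pushed toward
    # distinct concepts; add this term (scaled) to the task loss.
    z = torch.nn.functional.normalize(embeddings, dim=1)
    sim = z @ z.t()                               # pairwise cosine similarities
    off_diag = sim - torch.eye(z.shape[0], device=z.device)
    return off_diag.abs().mean()
```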
Stochastic simulation and robust design optimization of integrated photonic filters
Manufacturing variations are becoming an unavoidable issue in modern fabrication processes; therefore, it is crucial to be able to include stochastic uncertainties in the design phase. In this paper, integrated photonic coupled ring resonator filters are considered as an example of significant interest. The sparsity structure in photonic circuits is exploited to construct a sparse combined generalized polynomial chaos model, which is then used to analyze related statistics and perform robust design optimization. Simulation results show that the optimized circuits are more robust to fabrication process variations and achieve a reduction of 11%–35% in the mean square errors of the 3 dB bandwidth compared to unoptimized nominal designs.
Funding: MIT Skoltech Initiative; Progetto Roberto Rocca (Seed Funds); National Science Foundation (U.S.) (AIM Photonics Center, Contract 1227020-EEC); Semiconductor Research Corporation.
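For intuition on how statistics are read off a generalized polynomial chaos model, consider the simplest 1-D case with a standard Gaussian input and a probabilists' Hermite basis (the paper's model is a sparse multivariate expansion; this sketch only shows the principle):

```python
import math
import numpy as np

def gpc_mean_var(coeffs):
    # f(x) = sum_k c_k He_k(x) with x ~ N(0, 1) and probabilists' Hermite
    # polynomials He_k. Orthogonality E[He_j He_k] = k! * delta_jk gives:
    #   mean = c_0,  variance = sum_{k>=1} k! * c_k^2
    c = np.asarray(coeffs, dtype=float)
    mean = c[0]
    var = sum(math.factorial(k) * c[k] ** 2 for k in range(1, len(c)))
    return mean, var
```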
Prediction without Preclusion: Recourse Verification with Reachable Sets
Machine learning models are often used to decide who will receive a loan, a
job interview, or a public benefit. Standard techniques to build these models
use features about people but overlook their actionability. In turn, models can
assign predictions that are fixed, meaning that consumers who are denied loans,
interviews, or benefits may be permanently locked out from access to credit,
employment, or assistance. In this work, we introduce a formal testing
procedure, which we call recourse verification, to flag models that assign
fixed predictions. We develop machinery to reliably determine whether a given
model can provide recourse to its decision subjects under a set of user-specified
actionability constraints. We demonstrate how our tools can ensure recourse and
adversarial robustness in real-world datasets and use them to study the
infeasibility of recourse in real-world lending datasets. Our results highlight
how models can inadvertently assign fixed predictions that permanently bar
access, and we provide tools to design algorithms that account for
actionability when developing models.
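A brute-force sketch of the verification question on a toy model with binary features (the paper's method works with reachable sets rather than enumeration; the scoring rule and feature names below are made up for illustration):

```python
from itertools import product

def reachable_set(x, actionable):
    # Enumerate points reachable from x by changing only actionable
    # (binary) features; immutable features stay fixed.
    for vals in product([0, 1], repeat=len(actionable)):
        y = dict(x)
        for feat, v in zip(actionable, vals):
            y[feat] = v
        yield y

def has_recourse(predict, x, actionable):
    # x has recourse iff some reachable point receives a favorable
    # (here: 1) prediction; otherwise the prediction is fixed.
    return any(predict(y) == 1 for y in reachable_set(x, actionable))

# Toy usage: a denied applicant who can act on two of three features.
predict = lambda p: int(p["income_docs"] + p["savings_plan"] >= 2)
x = {"income_docs": 0, "savings_plan": 0, "prior_default": 1}
print(has_recourse(predict, x, ["income_docs", "savings_plan"]))  # True
```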
Sequences of Numbers Meet the Generalized Gegenbauer-Humbert Polynomials
Here we present a connection between a sequence of numbers generated by a linear recurrence relation of order 2 and sequences of the generalized Gegenbauer-Humbert polynomials. Many new and known formulas of the Fibonacci, the Lucas, the Pell, and the Jacobsthal numbers in terms of the generalized Gegenbauer-Humbert polynomial values are given. The applications of the relationship to the construction of identities of number and polynomial value sequences defined by linear recurrence relations are also discussed.
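One well-known instance of this kind of connection (given here for illustration, not quoted from the paper): the Chebyshev polynomials of the second kind, a special case of the generalized Gegenbauer-Humbert family, encode the Fibonacci numbers:

```latex
% U_n satisfies U_n(x) = 2x U_{n-1}(x) - U_{n-2}(x), with U_0 = 1, U_1 = 2x.
% Evaluating at x = i/2 and unwinding the recurrence yields
F_{n+1} = i^{-n}\, U_n\!\left(\tfrac{i}{2}\right), \qquad n \ge 0.
```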